
Zheng, Zhiqiang (Eric)

Topic Weight Topic Terms
0.499 reviews product online review products wom consumers consumer ratings sales word-of-mouth impact reviewers word using
0.457 data used develop multiple approaches collection based research classes aspect single literature profiles means crowd
0.387 performance firm measures metrics value relationship firms results objective relationships firm's organizational traffic measure market
0.338 dynamic time dynamics model change study data process different changes using longitudinal understanding decisions develop
0.301 data classification statistical regression mining models neural methods using analysis techniques performance predictive networks accuracy
0.294 online users active paper using increasingly informational user data internet overall little various understanding empirical
0.255 structural pls measurement modeling equation research formative squares partial using indicators constructs construct statistical models
0.252 empirical model relationships causal framework theoretical construct results models terms paper relationship based argue proposed
0.250 theory theories theoretical paper new understanding work practical explain empirical contribution phenomenon literature second implications
0.238 firms firm financial services firm's size examine new based result level including results industry important
0.229 satisfaction information systems study characteristics data results using user related field survey empirical quality hypotheses
0.202 data predictive analytics sharing big using modeling set power inference behavior explanatory related prediction statistical
0.202 health healthcare medical care patient patients hospital hospitals hit health-care telemedicine systems records clinical practices
0.200 systems information management development presented function article discussed model personnel general organization described presents finally
0.193 digital divide use access artifacts internet inequality libraries shift library increasingly everyday societies understand world
0.186 consumer consumers model optimal welfare price market pricing equilibrium surplus different higher results strategy quality
0.182 use question opportunities particular identify information grammars researchers shown conceptual ontological given facilitate new little
0.171 mobile telecommunications devices wireless application computing physical voice phones purchases ubiquitous applications conceptualization secure pervasive
0.171 recommendations recommender systems preferences recommendation rating ratings preference improve users frame contextual using frames sensemaking
0.169 e-commerce value returns initiatives market study announcements stock event abnormal companies significant growth positive using
0.162 information types different type sources analysis develop used behavior specific conditions consider improve using alternative
0.157 competitive advantage strategic systems information sustainable sustainability dynamic opportunities capabilities environments environmental turbulence turbulent dynamics
0.150 search information display engine results engines displays retrieval effectiveness relevant process ranking depth searching economics
0.147 impact data effect set propensity potential unique increase matching use selection score results self-selection heterogeneity
0.132 intelligence business discovery framework text knowledge new existing visualization based analyzing mining genetic algorithms related
0.130 research studies issues researchers scientific methodological article conducting conduct advanced rigor researcher methodology practitioner issue
0.124 technology organizational information organizations organization new work perspective innovation processes used technological understanding technologies transformation
0.123 adaptation patterns transition new adjustment different critical occur manner changes adapting concept novel temporary accomplish
0.123 market competition competitive network markets firms products competing competitor differentiation advantage competitors presence dominant structure
0.120 methods information systems approach using method requirements used use developed effective develop determining research determine
0.102 infrastructure information flexibility new paper technology building infrastructures flexible development human creating provide despite challenge

Coauthor network (interactive figure omitted). Legend: focal researcher; coauthors of the focal researcher (1st degree); coauthors of coauthors (2nd degree). The number on each edge is the number of co-authorships.

Coauthors (14)

Pavlou, Paul A. 2 Padmanabhan, Balaji 2 Santos, Brian L. Dos 2 Bardhan, Indranil R. 1
Chen, Hongyu 1 Fader, Peter 1 Fichman, Robert G. 1 Gu, Bin 1
Jabr, Wael 1 Kimbrough, Steven O. 1 Kirksey, Kirk 1 Mookerjee, Vijay S. 1
Oh, Jeong-ha (Cath) 1 Weber, Thomas A. 1
Keywords (43)

d-separation 1 Bayesian graphs 1 Bayesian networks 1 business value of IT 1
business intelligence 1 causality 1 competitive intelligence 1 competitive measures 1
competition 1 congestive heart failure 1 Data mining 1 digital innovation 1
event study 1 eCRM 1 eWOM 1 financial market evaluation 1
Fundamental and powerful concepts (FPC) 1 healthcare information technologies 1 information technology industry 1 IT and firm performance 1
IT value 1 incomplete data 1 information value 1 IS core course 1
instrument variable 1 latent growth modeling 1 LGM 1 longitudinal data 1
markets and auctions 1 macroeconomic news 1 NBD/Dirichlet 1 observational data 1
Online review 1 paid referrals 1 probability models 1 pedagogy 1
patient readmissions 1 predictive healthcare analytics 1 recommendation system 1 search intermediary 1
structural equation modeling 1 stock price volatility 1 word of mouth 1

Articles (9)

Predictive Analytics for Readmission of Patients with Congestive Heart Failure (Information Systems Research, 2015)
Abstract:
    Mitigating preventable readmissions, where patients are readmitted for the same primary diagnosis within 30 days, poses a significant challenge to the delivery of high-quality healthcare. Toward this end, we develop a novel predictive analytics model, termed the beta geometric Erlang-2 (BG/EG) hurdle model, which predicts the propensity, frequency, and timing of readmissions of patients diagnosed with congestive heart failure (CHF). This unified model enables us to answer three key questions related to the use of predictive analytics methods for patient readmissions: whether a readmission will occur, how often readmissions will occur, and when a readmission will occur. We test our model using a unique data set that tracks patient demographic, clinical, and administrative data across 67 hospitals in North Texas over a four-year period. We show that our model provides superior predictive performance compared to extant models such as the logit, BG/NBD hurdle, and EG hurdle models. Our model also allows us to study the association between hospital usage of health information technologies (IT) and readmission risk. We find that health IT usage, patient demographics, visit characteristics, payer type, and hospital characteristics are significantly associated with patient readmission risk. We also observe that implementation of cardiology information systems is associated with a reduction in the propensity and frequency of future readmissions, whereas administrative IT systems are correlated with a lower frequency of future readmissions. Our results indicate that patient profiles derived from our model can serve as building blocks for a predictive analytics system to identify CHF patients with high readmission risk.
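The timing and frequency components named in the abstract can be illustrated with two standard probability building blocks. This is a minimal sketch of an Erlang-2 timing density and a beta-geometric count distribution under illustrative parameters, not the paper's full BG/EG hurdle model, which also incorporates covariates:

```python
import math

def _beta(x, y):
    # Beta function computed via log-gamma for numerical stability
    return math.exp(math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y))

def beta_geometric_pmf(k, a, b):
    # P(the k-th repeat visit is the last), when each patient's per-visit
    # stopping probability is drawn from a Beta(a, b) distribution
    return _beta(a + 1, b + k - 1) / _beta(a, b)

def erlang2_pdf(t, lam):
    # Erlang-2 density for time to readmission: the sum of two
    # exponential phases, each with rate lam
    return lam ** 2 * t * math.exp(-lam * t)
```

For k = 1 the beta-geometric mass reduces to a / (a + b), and the Erlang-2 density peaks at t = 1 / lam.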
Latent Growth Modeling for Information Systems: Theoretical Extensions and Practical Applications (Information Systems Research, 2014)
Abstract:
    This paper presents and extends Latent Growth Modeling (LGM) as a complementary method for analyzing longitudinal data, modeling the process of change over time, testing time-centric hypotheses, and building longitudinal theories. We first describe the basic tenets of LGM and offer guidelines for applying LGM to Information Systems (IS) research, specifically how to pose research questions that focus on change over time and how to implement LGM models to test time-centric hypotheses. Second and more important, we theoretically extend LGM by proposing a model validation criterion, namely "d-separation," to evaluate why and when LGM works and to test its fundamental properties and assumptions. Our d-separation criterion does not rely on any distributional assumptions about the data; it is grounded in the fundamental assumption of the theory of conditional independence. Third, we conduct extensive simulations to examine a multitude of factors that affect LGM performance. Finally, as a practical application, we apply LGM to model the relationship between word-of-mouth communication (online product reviews) and book sales over time with longitudinal 26-week data from Amazon. The paper concludes by discussing the implications of LGM for helping IS researchers develop and test longitudinal theories.
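The growth-factor idea behind LGM can be sketched in a few lines. Assuming a linear growth trajectory per unit and equally spaced time points, this toy two-step version fits each unit's intercept and slope by OLS and averages them; real LGM estimates these latent factors jointly within an SEM framework:

```python
def fit_growth(trajectory):
    # Per-unit OLS of the outcome on time (0, 1, 2, ...): returns
    # (intercept, slope) -- the unit's growth factors
    n = len(trajectory)
    ts = list(range(n))
    t_bar = sum(ts) / n
    y_bar = sum(trajectory) / n
    sxy = sum((t - t_bar) * (y - y_bar) for t, y in zip(ts, trajectory))
    sxx = sum((t - t_bar) ** 2 for t in ts)
    slope = sxy / sxx
    return y_bar - slope * t_bar, slope

def growth_factor_means(panel):
    # Mean intercept and mean slope across units -- the fixed part of a
    # linear latent growth model (the random part is their variances)
    fits = [fit_growth(y) for y in panel]
    n = len(fits)
    return (sum(i for i, _ in fits) / n, sum(s for _, s in fits) / n)
```

For the toy panel [[1, 2, 3], [2, 4, 6]] the mean intercept and mean slope are both 1.5.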
Digital Innovation as a Fundamental and Powerful Concept in the Information Systems Curriculum (MIS Quarterly, 2014)
Abstract:
    The 50-year march of Moore’s Law has led to the creation of a relatively cheap and increasingly easy-to-use world-wide digital infrastructure of computers, mobile devices, broadband network connections, and advanced application platforms. This digital infrastructure has, in turn, accelerated the emergence of new technologies that enable transformations in how we live and work, how companies organize, and the structure of entire industries.
Know Yourself and Know Your Enemy: An Analysis of Firm Recommendations and Consumer Reviews in a Competitive Environment (MIS Quarterly, 2014)
Abstract:
    Reviews and product recommendations at online stores have enabled customers to readily evaluate alternative products prior to any purchase. In this context, firms generate recommendations to refer customers to a wider variety of products. They also display customer-generated online reviews to facilitate evaluation of those recommended products. This study integrates these two IT artifacts to investigate consumer choice vis-à-vis competing products. We use a dataset we collected from Amazon.com consisting of books, sales ranks, recommendations, reviews, and reviewers. We derive the granular impact of reviews, product referrals, and reviewer opinions on the dynamics of product sales within a competitive market using comprehensive econometric analyses.
Are New IT-Enabled Investment Opportunities Diminishing for Firms? (Information Systems Research, 2012)
Abstract:
    Today, few firms could survive for very long without their computer systems. IT has permeated every corner of firms. Firms have reached the current state in their use of IT because IT has provided myriad opportunities for firms to improve performance, and firms have availed themselves of these opportunities. Some have argued, however, that the opportunities for firms to improve their performance through new uses of IT have been declining. Are the opportunities to use IT to improve firm performance diminishing? We sought to answer this question. In this study, we develop a theory and explain the logic behind our empirical analysis, an analysis that employs a different type of event study. Using the volatility of firms' stock prices in response to news signaling a change in economic conditions, we compare the stock price behavior of firms in the IT industry to that of firms in the utility and transportation and freight industries. Our analysis of the IT industry as a whole indicates that the opportunities for firms to use IT to improve their performance are not diminishing. However, there are sectors within the IT industry that no longer provide value-enhancing opportunities for firms. We also find that IT products that provided opportunities for firms to create value at one point in time later become necessities for staying in business. Our results support the key assumption in our work.
From Business Intelligence to Competitive Intelligence: Inferring Competitive Measures Using Augmented Site-Centric Data. (Information Systems Research, 2012)
Abstract:
    Managers routinely seek to understand firm performance relative to their competitors. Recently, competitive intelligence (CI) has emerged as an important area within business intelligence (BI) in which the emphasis is on understanding and measuring a firm's external competitive environment. A requirement of such systems is the availability of rich data about a firm's competitors, which is typically hard to acquire. This paper proposes a method to incorporate competitive intelligence in BI systems by using less granular, aggregate data, which is usually easier to acquire. We motivate, develop, and validate an approach to infer key competitive measures about customer activities without requiring detailed cross-firm data. Instead, our method derives these competitive measures for online firms from simple "site-centric" data that are commonly available, augmented with aggregate data summaries that may be obtained from syndicated data providers. Based on data provided by comScore Networks, we show empirically that our method performs well in inferring several key diagnostic competitive measures, namely penetration, market share, and share of wallet, for various online retailers.
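The three diagnostic measures named in the abstract are linked by a simple accounting identity (market share = penetration x share of wallet). The sketch below computes them from hypothetical site-level and aggregate market totals; the paper's contribution is inferring such measures without cross-firm customer-level data, which this toy calculation does not attempt:

```python
def competitive_measures(site_buyers, site_revenue, market_buyers, market_revenue):
    # Illustrative inputs: the firm's own buyer count and revenue (site-centric
    # data) plus category-wide totals (e.g., from a syndicated data provider).
    penetration = site_buyers / market_buyers      # share of category buyers reached
    market_share = site_revenue / market_revenue   # share of category revenue
    # Average share of wallet implied by the aggregates:
    # per-buyer spend at the firm / per-buyer spend in the category
    share_of_wallet = (site_revenue / site_buyers) / (market_revenue / market_buyers)
    return penetration, market_share, share_of_wallet
```

Here penetration is the fraction of category buyers the firm reaches, and share of wallet is the firm's share of its buyers' category spend (as implied by the aggregates).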
Toward a Causal Interpretation from Observational Data: A New Bayesian Networks Method for Structural Models with Latent Variables. (Information Systems Research, 2010)
Abstract:
    Because a fundamental attribute of a good theory is causality, the information systems (IS) literature has strived to infer causality from empirical data, typically seeking causal interpretations from longitudinal, experimental, and panel data that include time precedence. However, such data are not always obtainable, and observational (cross-sectional, nonexperimental) data are often the only data available. To infer causality from observational data that are common in empirical IS research, this study develops a new data analysis method that integrates the Bayesian networks (BN) and structural equation modeling (SEM) literatures. Similar to SEM techniques (e.g., LISREL and PLS), the proposed Bayesian networks for latent variables (BN-LV) method tests both the measurement model and the structural model. The method operates in two stages: First, it inductively identifies the most likely LVs from measurement items without prespecifying a measurement model. Second, it compares all the possible structural models among the identified LVs in an exploratory (automated) fashion and discovers the most likely causal structure. By exploring causal structural models that are not restricted to linear relationships, BN-LV contributes to the empirical IS literature by overcoming three SEM limitations (Lee, B., A. Barua, A. B. Whinston. 1997. Discovery and representation of causal relationships in MIS research: A methodological framework. MIS Quart. 21(1) 109-136): lack of causality inference, restrictive model structure, and lack of nonlinearities. Moreover, BN-LV extends the BN literature by (1) overcoming the problem of latent variable identification using observed (raw) measurement items as the only inputs, and (2) enabling the use of ordinal and discrete (Likert-type) data, which are commonly used in empirical IS studies.
The BN-LV method is first illustrated and tested with actual empirical data to demonstrate how it can help reconcile competing hypotheses in terms of the direction of causality in a structural model. Second, we conduct a comprehensive simulation study to demonstrate the effectiveness of BN-LV compared to existing techniques in the SEM and BN literatures. The advantages of BN-LV in terms of measurement model construction and structural model discovery are discussed.
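Structure discovery of the kind BN-LV performs rests on the conditional-independence patterns that d-separation assigns to candidate structures: chains and forks are blocked by conditioning on the middle variable, while colliders are blocked only when the middle variable is not conditioned on. A minimal illustration of the three canonical triples (a hypothetical helper, not the paper's algorithm):

```python
def d_separated_triple(structure, conditioned_on_middle):
    # d-separation of X and Y in the three canonical triples X - Z - Y:
    #   chain    X -> Z -> Y : blocked iff we condition on Z
    #   fork     X <- Z -> Y : blocked iff we condition on Z
    #   collider X -> Z <- Y : blocked iff we do NOT condition on Z
    if structure in ("chain", "fork"):
        return conditioned_on_middle
    if structure == "collider":
        return not conditioned_on_middle
    raise ValueError(f"unknown structure: {structure}")
```

The collider asymmetry is what makes some causal directions distinguishable from observational data: a chain and a fork imply the same independencies, but a collider does not.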
A Model of Search Intermediaries and Paid Referrals. (Information Systems Research, 2007)
Abstract:
    In this paper we pursue three main objectives: (1) to develop a model of an intermediated search market in which matching between consumers and firms takes place primarily via paid referrals; (2) to address the question of designing a suitable mechanism for selling referrals to firms; and (3) to characterize and analyze the firms' bidding strategies given consumers' equilibrium search behavior. To achieve these objectives we develop a two-stage model of search intermediaries in a vertically differentiated product market. In the first stage an intermediary chooses a search engine design that specifies to what extent a firm's search rank is determined by its bid and to what extent it is determined by the product offering's performance. In the second stage, based on the search engine design, competing firms place their open bids to be paid for each referral by the search engine. We find that the revenue-maximizing search engine design bases rankings on a weighted average of product performance and bid amount. Nonzero pure-strategy equilibria of the underlying discontinuous bidding game generally exist but are not robust with respect to noisy clicks in the system. We determine a unique nondegenerate mixed-strategy Nash equilibrium that is robust to noisy clicks. In this equilibrium, firms of low product performance fully dissipate their rents, which are appropriated by the search intermediary and the firm with the better product. The firms' expected bid amounts are generally nonmonotonic in product performance and depend on the search engine design parameter. The intermediary's profit-maximizing design choice, by attributing a positive weight to the firms' bids, tends to obfuscate search results and reduce overall consumer surplus compared to the socially optimal design of fully transparent results ranked purely on product performance.
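The revenue-maximizing design described above scores firms by a weighted average of product performance and bid. A minimal sketch of such a scoring rule, with illustrative names and an assumed common scale for performance and bids, not the paper's equilibrium analysis:

```python
def rank_referrals(firms, w):
    # Rank firms by score = w * performance + (1 - w) * bid.
    # w = 1 is the fully transparent design (rank purely on performance);
    # w = 0 ranks purely on bids. `firms` maps name -> (performance, bid).
    scored = {name: w * perf + (1 - w) * bid
              for name, (perf, bid) in firms.items()}
    return sorted(scored, key=scored.get, reverse=True)
```

With an interior w, a high bid can outrank a better product, which is the obfuscation effect the abstract notes.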
An Empirical Analysis of the Value of Complete Information for eCRM Models. (MIS Quarterly, 2006)
Abstract:
    Due to the vast amount of user data tracked online, the use of data-based analytical methods is becoming increasingly common for e-businesses. Recently, the term analytical eCRM has been used to refer to the use of such methods in the online world. A characteristic of most current approaches in eCRM is that they use data collected about users' activities at a single site only and, as we argue in this paper, this can present an incomplete picture of user activity. However, it is possible to obtain a complete picture of user activity from across-site data on users. Such data is expensive, but it can be obtained by firms directly from their users or from market data vendors. A critical question is whether such data is worth obtaining, an issue that little prior research has addressed. In this paper, using a data mining approach, we present an empirical analysis of the modeling benefits that can be obtained by having complete information. Our results suggest that the magnitudes of the gains that can be obtained from complete data range from a few percentage points to 50 percent, depending on the problem for which the data is used and the performance metrics considered. Qualitatively, we find that variables related to customer loyalty and browsing intensity are particularly important, and these variables are difficult to derive from data collected at a single site. More importantly, we find that a firm has to collect a reasonably large amount of complete data before any benefits can be reaped, and we caution against acquiring too little data.